1 About SLICK

SLICK is a decision analysis tool that presents the outcomes of potential policy options across various states of nature. The App allows for the simultaneous presentation of various performance metrics and can account for uncertainty in the states of nature. SLICK is interactive and allows users to filter results live in order to explore robustness and performance.

While SLICK can be applied to any decision analysis context, it was specifically designed to investigate the performance of fisheries management procedures tested by management strategy evaluation (MSE).

Importantly, the App is platform agnostic: results arising from any MSE framework that are formatted as a compatible SLICK object can be loaded and visualized in the App. The MSE R packages DLMtool and MSEtool are SLICK-compatible and include tools to convert MSE results to the SLICK format.

2 Purpose of this document

This document:

  • Explains how to access and use SLICK
  • Describes SLICK outputs
  • Provides a description of how to format MSE results for use in SLICK

3 Quick Start

For a demonstration of SLICK, go to the online App hosted here.

4 Introduction

4.1 Management Strategy Evaluation

Management Strategy Evaluation (MSE) is an approach for establishing simple rules for managing a resource and then simulation testing their robustness to various hypothetical scenarios for system dynamics (Butterworth and Punt 1999; Cochrane et al. 1998).

Often referred to as Management Procedures (MPs, aka Harvest Strategies) these rules typically use streamlined data to generate management advice such as a Total Allowable Catch (TAC).

In fisheries, MSE differs substantially from conventional stock assessment in how models of fisheries dynamics are used to derive management advice. In conventional stock assessment, these models are used to derive management advice directly: for example, setting a TAC commensurate with the fishing mortality rate at maximum sustainable yield. MSEs typically use a greater number of fitted fisheries dynamics models ('operating models') that span a much wider range of uncertainties in order to test the robustness of MPs. The focus in MSE is robustness, accounting for feedbacks between management options and the system, rather than establishing a single 'best' model of the resource.

Consequently, MSE allows managers and stakeholders to establish a comparatively simple management rule (an MP), understand its performance and have confidence that it can perform adequately even in the face of uncertainties in system dynamics.

Punt et al. (2014) provide a comprehensive summary of the history of MSE implementations.

4.2 SLICK Presentation of MSE Results

MSEs have four basic axes over which results are generally presented:

  1. operating models (a state of nature or scenario for real system dynamics)
  2. management procedures (MPs - a management option, aka harvest strategy)
  3. performance metrics (aka cost function, utility measure)
  4. uncertainty within an operating model (multiple simulations for each discrete state of nature)

SLICK allows users to filter operating models, performance metrics and management procedures in order to explore robustness and characterize performance. Importantly, SLICK is MSE-platform agnostic. Provided MSE practitioners format their results in a compatible SLICK object, these can be loaded to the App.

SLICK presents MSE results in 11 Figures designed to inform decision making by revealing the absolute and comparative performance of candidate management procedures.

5 Accessing the App

5.1 Online

SLICK is freely available online.

5.2 Offline

You can also run the App locally on your computer. To do so, install the SLICK R package and use the SLICK() function:

library(SLICK)
SLICK()

6 Using the App

6.1 Performance Comparison 1

The first page provides a top-sheet overview of performance among candidate MPs. Individual radar plots provide MP-specific performance outcomes and a larger radar plot provides direct comparisons among candidate MPs. These are deterministic (point-value) performance metrics, scaled from 0 to 100, where 100 is better performance.

MP-specific radar plots include a value which is the mean score among all selected performance metrics.

These plots can include a large number of performance metrics. However, where possible it is best to select a small number: when radar plots have a large number of metrics, the order in which they are presented can strongly determine the apparent size of the shaded area.

6.2 Performance Comparison 2

Line graphs provide an alternative comparison of candidate MP performance. Performance among the various metrics increases along the x-axis, with aggregate mean performance among all metrics presented as a large point at the top of the plot.

6.3 Performance Comparison 3

As a complement to Performance Comparison 2, the same information can be presented in floating bars that better characterize the range of performance outcomes among candidate MPs.

6.4 Projection of trade-off

A standard diagnostic for sustainable exploitation is the Kobe plot which describes MP biomass performance relative to a target level on the x-axis and exploitation rate performance relative to a target on the y-axis. This plot distinguishes between the biological status of the stock and its probable trajectory.

6.5 State Projection 1

In many decision-making contexts there is a state variable of interest (e.g. population numbers) that, like performance metrics, has a projected future. Unlike performance metrics, state variables also have a historical reconstruction that provides important context for projected outcomes.

6.6 Performance Trade-offs 1

A single Kobe-like plot summarizes the outcomes of the MSE projection in the final projection year. This plot helps to summarize long-term biomass and exploitation performance to better highlight contrast in sustainability among MPs.

6.7 Performance Trade-offs 2

An alternative summary of Kobe-type biomass and exploitation metrics is provided that attempts to rank candidate MPs to further highlight critical differences in sustainability.

6.8 Performance Trade-offs 3

Box plots focus on the uncertainty in performance outcomes among candidate MPs, where possible highlighting those that obtain the best performance.

6.9 Performance Trade-offs 4

An important feature of MSE is that it focuses on robustness of MPs among various operating models. The fourth set of performance trade-off plots presents results disaggregated by operating model to help identify those scenarios that are pinch points for MP performance.

6.10 Performance Comparison 5

A radar plot array among MPs and operating models reveals scenarios that affect the absolute and relative performance of candidate MPs.

6.11 State Projection 2

State variable projections are provided across a range of operating models and MPs.

7 Loading SLICK examples

A number of MSEs have converted their results to SLICK objects that may be loaded into the App. Currently these case studies are available from a shared drive.

8 Making custom slick objects

It is straightforward for MSE users to present their results using SLICK. To do so they must:

  1. Download the SLICK R package
  2. Create a blank SLICK object
  3. Populate the various slots of the object with their data, text and labelling
  4. Save the SLICK object and upload it to the App

SLICK objects have the following dimensions:

  • deterministic performance metrics (nD)
  • stochastic performance metrics (nS)
  • projected performance metrics (nP)
  • management options (nMO)
  • simulations (nsim)
  • projection years (nProjYr)
  • state variables (nStateVar)
  • historical years (nHistYr)

When you create a blank SLICK object you must specify the ‘shape’ of the results according to these dimensions:

library(SLICK)
mySLICKobj = NewSLICK(nPerf=list(nD=5,nS=6,nP=7), # numbers of deterministic (nD), stochastic (nS) and projected (nP) metrics
                      nMOs=5,
                      nsim=10,
                      nProjYr=50,
                      nStateVar=2,
                      nHistYr=55,
                      Design=as.matrix(expand.grid(1:2,1:2)))

The final argument to the NewSLICK function is 'Design'. This is the design matrix for the operating models: a table with a row for each operating model and a column for each factor type (e.g. natural mortality rate, resilience etc.) containing the level of each factor (this is described in further detail below).
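As a minimal sketch (base R only), a design grid like the 17-operating-model example in the next section can be built by fully crossing the reference factors and then appending one row per robustness OM. The factor names and level counts here match that example; the ordering of rows is an assumption.

```r
# Fully crossed 2 x 3 x 2 reference grid (expand.grid varies the first factor fastest)
ref <- expand.grid(`Natural Mortality` = 1:2,
                   Resilience          = 1:3,
                   `Stock Depletion`   = 1:2)
ref$Robustness <- 1             # all reference OMs share robustness level 1

rob <- ref[rep(1, 5), ]         # each robustness OM copies the first reference OM...
rob$Robustness <- 2:6           # ...but takes robustness levels 2-6

Design <- rbind(ref, rob)       # 12 reference + 5 robustness = 17 rows
```

The resulting data frame can be passed as the Design argument of NewSLICK.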

Once you have a blank SLICK object you can process your MSE results in R and add them to the blank object. Below is a guide to each slot of the object and the correct format for these.

The SLICK object is of class S3 (it has slots that are accessed via the $ operator). The structure of the SLICK object and the format of each slot is outlined in Figure X.

8.1 OM - Operating Model

All information about the various operating models is included in this slot. Operating models are often organized as orthogonal grids of various factors that represent axes of uncertainty (e.g. natural mortality, resilience, growth etc.). Each of these factors can have many levels. Additionally, operating models may also include reference (central uncertainties) and robustness sets. To accommodate such designs (and any other configuration), SLICK uses a design grid to describe all operating models (rows) and their relevant levels of each factor (columns).

The OM slot contains five components:

  • Design (a data frame representing the operating model design matrix with a row for each OM and a column for each factor)
  • Factor_Labels (a character vector of the names of the factors)
  • Labels (abbreviated factor-level labels for tabulation)
  • Codes (further abbreviated factor-level labels for plotting purposes)
  • Description (text description of each factor-level)

Here is an example of an OM slot for a SLICK object that has three reference set factors (natural mortality rate, resilience, stock depletion):

SLICKobj$OM
#> $Design
#>    Natural Mortality Resilience Stock Depletion Robustness
#> 1                  1          1               1          1
#> 2                  2          1               1          1
#> 3                  1          2               1          1
#> 4                  2          2               1          1
#> 5                  1          3               1          1
#> 6                  2          3               1          1
#> 7                  1          1               2          1
#> 8                  2          1               2          1
#> 9                  1          2               2          1
#> 10                 2          2               2          1
#> 11                 1          3               2          1
#> 12                 2          3               2          1
#> 13                 1          1               1          2
#> 14                 1          1               1          3
#> 15                 1          1               1          4
#> 16                 1          1               1          5
#> 17                 1          1               1          6
#> 
#> $Factor_Labels
#> [1] "Natural Mortality" "Resilience"        "Stock Depletion"   "Robustness"       
#> 
#> $Description
#> $Description[[1]]
#> [1] "M=0.2" "M=0.3"
#> 
#> $Description[[2]]
#> [1] "h=0.5" "h=0.7" "h=0.9"
#> 
#> $Description[[3]]
#> [1] "Dep=0.1" "Dep=0.3"
#> 
#> $Description[[4]]
#> [1] "Reference Case" "L50=0.5"        "Vmaxlen=0.1"    "Cobs=0.5"       "Perr=0.5"       "AC=0.95"       
#> 
#> 
#> $Codes
#> $Codes[[1]]
#> [1] "M2" "M3"
#> 
#> $Codes[[2]]
#> [1] "h5" "h7" "h9"
#> 
#> $Codes[[3]]
#> [1] "D1" "D3"
#> 
#> $Codes[[4]]
#> [1] "Ref_case" "mat_low"  "dome"     "h_Cerr"   "h_Perr"   "h_AC"    
#> 
#> 
#> $Labels
#> $Labels[[1]]
#> [1] "M=0.2" "M=0.3"
#> 
#> $Labels[[2]]
#> [1] "h=0.5" "h=0.7" "h=0.9"
#> 
#> $Labels[[3]]
#> [1] "Dep=0.1" "Dep=0.3"
#> 
#> $Labels[[4]]
#> [1] "Ref_Case"    "L50=0.5"     "Vmaxlen=0.1" "Cobs=0.5"    "Perr=0.5"    "AC=0.95"

There are 2 levels of natural mortality rate, 3 levels of resilience and 2 levels of depletion. Since these are fully orthogonal, the product is 2 x 3 x 2 = 12 reference operating models. Additionally there is a fourth factor distinguishing robustness from reference-set operating models. Since there are 5 robustness OMs, the total number of operating models (rows in the design grid) is 17.

The Labels, Codes and Description slots are all hierarchical lists, with a vector of level names nested within each factor type.

8.2 Perf - Performance Metrics

There are three types of performance metrics:

  • Det - Deterministic (point values; one per OM, MP and deterministic metric) (e.g. mean catches in 2030 or 5th percentile of biomass in year 2050 etc.)
  • Stoch - Stochastic (a value per simulation, OM, MP and stochastic metric) (e.g. average annual variability in catches 2021-2040, catch in 2040)
  • Proj - Projected (a value per simulation, OM, MP, time step and projected metric) (e.g. spawning biomass relative to target levels, F relative to target levels)

names(SLICKobj$Perf)
#> [1] "Det"   "Stoch" "Proj"

These performance metric types have varying roles in the SLICK App, each featuring only in particular results plots. For example, radar plots (e.g. Performance Comparison 1) present deterministic performance metrics. Box plots (e.g. Performance Trade-offs 3) present stochastic performance metrics. Any plots that include metrics over time present projected performance metrics (e.g. Projection of Trade-offs).

The Det, Stoch and Proj slots can include varying numbers of metrics - they do not have to match. These slots can also be left empty; however, without all three, some of the SLICK outputs will be missing.

There are some constraints in how these metrics should be scaled for use in the App. In order to make presentation and ranking possible within the App, Deterministic and Stochastic performance metrics should take on values between 0 and 100, where 0 is poor performance and 100 is ideal performance. For example, it would be necessary to take the complement of 'probability of overfishing' (where less is better) to derive 'probability of not overfishing' (where more is then better).
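The scaling above can be sketched in base R with hypothetical metric values (the numbers here are illustrative only): a 'less is better' probability is flipped with its complement, and a yield-type metric can be rescaled relative to its best observed value.

```r
# Hypothetical per-MP values of a 'less is better' metric
p_overfishing     <- c(0.05, 0.40, 0.90)
p_not_overfishing <- (1 - p_overfishing) * 100   # complement, on the 0-100 'more is better' scale

# Hypothetical yield metric rescaled so the best MP scores 100
yield       <- c(500, 800, 1000)
yield_score <- yield / max(yield) * 100
```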

Projected performance metrics are used in providing Kobe-like outputs among several SLICK plots. The default is to make the first projected metric SSB relative to SSBMSY (Kobe x-axis) and the second projected metric exploitation rate relative to MSY levels (F/FMSY).

8.2.1 Det - Deterministic performance metrics

As in the operating model slot OM, each of the performance metric types (Det, Stoch and Proj) has Labels, Codes and Description slots that provide labels for plotting axes, further abbreviated codes for other interface features and full descriptions, respectively. These are simply character vectors:

names(SLICKobj$Perf$Det)
#> [1] "Labels"      "Codes"       "Description" "Values"      "RefPoints"   "RefNames"

SLICKobj$Perf$Det$Labels
#> [1] "Prob. AAVE < 20% (Years 1-50)"              "Prob. AAVY < 20% (Years 1-50)"              "Prob. Yield > 0.5 Ref. Yield (Years 41-50)" "Prob. SB > 0.1 SBMSY (Years 1 - 50)"        "Prob. SB > 0.5 SBMSY (Years 1 - 50)"        "Prob. SB > SBMSY (Years 1 - 50)"            "Prob. F < FMSY (Years 1 - 50)"              "Prob. Yield > 0.5 Ref. Yield (Years 1-10)"  "Mean Relative Yield (Years 1-50)"

SLICKobj$Perf$Det$Codes
#> [1] "AAVE"  "AAVY"  "LTY"   "P10"   "P50"   "P100"  "PNOF"  "STY"   "Yield"

SLICKobj$Perf$Det$Description
#> [1] "Average Annual Variability in Effort (Years 1-50)"       "Average Annual Variability in Yield (Years 1-50)"        "Average Yield relative to Reference Yield (Years 41-50)" "Spawning Biomass relative to SBMSY"                      "Spawning Biomass relative to SBMSY"                      "Spawning Biomass relative to SBMSY"                      "Probability of not overfishing (F<FMSY)"                 "Average Yield relative to Reference Yield (Years 1-10)"  "Yield relative to Reference Yield (Years 1-50)"

The values of the performance metrics are stored in the Values slot that is an array with the dimensions [OM, MP, deterministic metric]. In this example there are 17 operating models, 6 management procedures and 9 deterministic performance metrics:

dim(SLICKobj$Perf$Det$Values)
#> [1] 17  6  9

range(SLICKobj$Perf$Det$Values)
#> [1]   0 100

Of course it is easy to do a quick check in R that you are seeing the expected values (the numbers represent the MPs):

matplot(SLICKobj$Perf$Det$Values[,,7],ylab=SLICKobj$Perf$Det$Labels[7],xlab="Operating model",type='b') # A quick sketch of your Det$Values data for metric #7

Two optional slots RefPoints and RefNames are also included that allow for the prescription of reference levels for each deterministic metric. These are lists with an entry for each metric:

SLICKobj$Perf$Det$RefPoints
#> [[1]]
#> [1] NA
#> 
#> [[2]]
#> [1] NA
#> 
#> [[3]]
#> [1] NA
#> 
#> [[4]]
#> [1] NA
#> 
#> [[5]]
#> [1] NA
#> 
#> [[6]]
#> [1] NA
#> 
#> [[7]]
#> [1] 100  50
#> 
#> [[8]]
#> [1] NA
#> 
#> [[9]]
#> [1] NA

SLICKobj$Perf$Det$RefNames
#> [[1]]
#> [1] NA
#> 
#> [[2]]
#> [1] NA
#> 
#> [[3]]
#> [1] NA
#> 
#> [[4]]
#> [1] NA
#> 
#> [[5]]
#> [1] NA
#> 
#> [[6]]
#> [1] NA
#> 
#> [[7]]
#> [1] "Target" "Limit" 
#> 
#> [[8]]
#> [1] NA
#> 
#> [[9]]
#> [1] NA

In this example, target and limit reference levels are provided for just one of the performance metrics (metric #7).

8.2.2 Stoch - Stochastic performance metrics

The formatting of the Stochastic performance metrics is identical to that of the deterministic metrics with the exception that the Values slot has an additional dimension for simulation (there are 48 simulations per operating model in this example).

There can be differing numbers of deterministic and stochastic performance metrics, but in this example they include the same types of metric (each deterministic version is simply the mean of the stochastic values across simulations):

names(SLICKobj$Perf$Stoch)
#> [1] "Labels"      "Codes"       "Description" "Values"      "RefPoints"   "RefNames"

dim(SLICKobj$Perf$Stoch$Values)
#> [1] 48 17  6  9

plot(density(SLICKobj$Perf$Stoch$Values[,1,3,7],from=0,to=100,adj=0.5),xlab=SLICKobj$Perf$Stoch$Labels[7],main=SLICKobj$MP$Labels[3],ylab="Rel. Freq.") # Distribution of values for the first OM, third MP and 7th stochastic metric
</plot>
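The relationship between the Stoch and Det slots described above can be sketched with mock random data (not real MSE output): the stochastic array [sim, OM, MP, metric] collapses to its deterministic counterpart [OM, MP, metric] by averaging over the simulation dimension.

```r
# Mock stochastic Values array with the dimensions from this example
set.seed(1)
nsim <- 48; nOM <- 17; nMP <- 6; nPM <- 9
Stoch <- array(runif(nsim * nOM * nMP * nPM, 0, 100),
               dim = c(nsim, nOM, nMP, nPM))

# Mean over simulations (dimension 1) gives a deterministic [OM, MP, metric] array
Det <- apply(Stoch, 2:4, mean)
dim(Det)   # 17 6 9, matching Det$Values above
```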

8.2.3 Proj - Projected performance metrics

The formatting of the Projected performance metrics is identical to that of the stochastic metrics with the exception that the Values slot has an additional dimension for time (there are 50 projected years in this example), along with a vector of time steps (Times) and a time-axis label (Time_lab).

names(SLICKobj$Perf$Proj)
#> [1] "Labels"      "Codes"       "Description" "Values"      "Times"       "RefPoints"   "RefNames"    "Time_lab"

SLICKobj$Perf$Proj$Times
#>  [1] 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 2064 2065 2066 2067 2068 2069 2070

dim(SLICKobj$Perf$Proj$Values)
#> [1] 48 17  6  4 50

matplot(SLICKobj$Perf$Proj$Times,t(SLICKobj$Perf$Proj$Values[,2,3,1,]),
        xlab=SLICKobj$Perf$Proj$Time_lab,ylab=SLICKobj$Perf$Proj$Labels[1],
        main=SLICKobj$MP$Labels[3], type="l", col="#00FF0030",lty=1,lwd=2) # Projection by simulation for the second OM, third MP and 1st projected metric

Any number of performance metrics can be included here. However, due to the default production of Kobe-like plots (Performance Comparison 3, Performance Trade-offs 1 and 2), it is recommended that, if they are to be reported, SSB relative to SSBMSY be placed in the first position (default x-axis of the Kobe plot; greater than 1 is better) and F relative to FMSY be placed in the second position (default y-axis of the Kobe plot; smaller than 1 is better).
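With that ordering, Kobe-style summaries follow directly. The mock sketch below (random data, not real MSE output) computes the probability of ending in the Kobe 'green' quadrant (SSB/SSBMSY > 1 and F/FMSY < 1) for one hypothetical OM and MP in the final projection year.

```r
# Mock [sim, year] slices standing in for Proj$Values[, om, mp, 1, ] and [, om, mp, 2, ]
set.seed(1)
nsim <- 48; nyr <- 50
B_rel <- matrix(rlnorm(nsim * nyr, 0, 0.3), nsim, nyr)  # SSB relative to SSBMSY
F_rel <- matrix(rlnorm(nsim * nyr, 0, 0.3), nsim, nyr)  # F relative to FMSY

# Fraction of simulations in the green quadrant in the final year
P_green <- mean(B_rel[, nyr] > 1 & F_rel[, nyr] < 1)
```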

8.3 StateVar - State Variables

In general, MSEs start with a historical reconstruction of system dynamics from which MPs are tested under projection. State variables are quantities that exist in the historical period as well as the projection, and in this way provide an historical perspective on future MP performance. Examples in fisheries MSE include spawning stock biomass, spawning numbers, recruitment etc.

The formatting of the state variables is very similar to that of the projected performance metrics, but these also include historical years and an entry 'TimeNow' that marks the end of the historical reconstruction and the start of the MSE projection.

names(SLICKobj$StateVar)
#> [1] "Labels"      "Codes"       "Description" "Values"      "Times"       "RefPoints"   "RefNames"    "TimeNow"     "Time_lab"

SLICKobj$StateVar$Times
#>   [1] 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 2064 2065 2066 2067 2068 2069 2070
SLICKobj$StateVar$TimeNow
#> [1] 2020

dim(SLICKobj$StateVar$Values)
#> [1]  48  17   6   2 100

matplot(SLICKobj$StateVar$Times,t(SLICKobj$StateVar$Values[,2,3,1,]),
        xlab=SLICKobj$StateVar$Time_lab,ylab=SLICKobj$StateVar$Labels[1],
        main=SLICKobj$MP$Labels[3], type="l", col="#FF000030",lty=1,lwd=2) # Time series by simulation for the second OM, third MP and 1st state variable

abline(v=SLICKobj$StateVar$TimeNow) 
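The TimeNow entry can be used, for example, to split a state-variable time series into its historical and projected parts. The values below mirror the example above.

```r
Times   <- 1971:2070
TimeNow <- 2020

hist_idx <- Times <= TimeNow   # 1971-2020: historical reconstruction
proj_idx <- Times >  TimeNow   # 2021-2070: MSE projection

sum(hist_idx)   # 50 historical years
sum(proj_idx)   # 50 projected years
```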

8.4 MP - Management Procedures

The management procedures slot contains the text required to populate figures and tables in the App. It has the same Labels, Codes and Description slots as the various performance metrics slots:

SLICKobj$MP
#> $Labels
#> [1] "DCAC"      "AvC"       "Fratio"    "FMSYref"   "FMSYref50" "NFref"    
#> 
#> $Codes
#> [1] "DCAC"      "AvC"       "Fratio"    "FMSYref"   "FMSYref50" "NFref"    
#> 
#> $Description
#> [1] "Management Option 1 means this and that" "Management Option 2 means this and that" "Management Option 3 means this and that" "Management Option 4 means this and that" "Management Option 5 means this and that" "Management Option 6 means this and that"

In this case the Labels and Codes are identical but in other cases it might be necessary to provide more descriptive text to the MP labels.

8.5 Text

The Text slot allows users to provide a title and introductory text for the main page of the App.

SLICKobj$Text
#> $Title
#> [1] "SLICK"
#> 
#> $Sub_title
#> [1] "Decision Analysis"
#> 
#> $Introduction
#> $Introduction[[1]]
#> [1] "First paragraph of the Introduction"
#> 
#> $Introduction[[2]]
#> [1] "Second paragraph of the Introduction"

The Introduction slot is a list with a character string for each paragraph.
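A minimal sketch of populating the Text slot by hand is shown below; the slot names are taken from the example above, and the list here is a stand-in for an object created with NewSLICK().

```r
mySLICKobj <- list()   # stand-in; in practice start from NewSLICK()
mySLICKobj$Text <- list(
  Title        = "My Fishery MSE",
  Sub_title    = "Candidate MP comparison",
  Introduction = list("First paragraph of the Introduction",
                      "Second paragraph of the Introduction")
)
```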

8.6 Misc

Finally, the Misc slot provides a place where the user can record authorship, contact information and other metadata.

SLICKobj$Misc
#> $Author
#> [1] "Anon"
#> 
#> $Contact
#> [1] NA
#> 
#> $Date
#> [1] "2020-08-18 14:42:02 PDT"
#> 
#> $Institution
#> [1] NA
#> 
#> $Logo
#> [1] NA
#> 
#> $App_axes
#>                     PM                     SN                     MO                    Sim                   Time 
#>   "Performance metric"      "Operating model" "Management Procedure"           "Simulation"                 "Year" 
#> 
#> $App_axes_code
#>    PM    SN    MO   Sim  Time 
#>  "PM"  "OM"  "MP" "Sim"  "Yr" 
#> 
#> $Cols
#> $Cols$MP
#> [1] "#00783A" "#90BD37" "#B2B3B7" "#F7951D" "#CF3C25" "#FECA0A"
#> 
#> $Cols$BG
#>      main       box    spider 
#>   "white" "#E4E9ED" "#E1E2E4" 
#> 
#> $Cols$KobeBG
#>         R     OFing        OF         G 
#> "#D8775D" "#F8DC7A" "#FDBD56" "#67C18B" 
#> 
#> $Cols$Kobeline
#> [1] "white"
#> 
#> $Cols$KobeText
#>         R     OFing        OF         G 
#> "#8A003C" "#988903" "#906600" "#019046" 
#> 
#> $Cols$KobePoint
#>         R     OFing        OF         G 
#> "#BD1018" "#EA6B1F" "#EA6B1F" "#007B3B" 
#> 
#> $Cols$RefPt
#>    target     limit     zeroC 
#> "#03A54F" "#EE1D23" "#93B6D9"

The App_axes slot provides meta-labels allowing users to control how the App is presented. For example, for hurricane disaster relief, the user may want to use the term 'Hurricane path' rather than 'Operating model', or 'Evacuation plan' instead of 'Management procedure'.

The Cols slot allows the user to control various color schemes to customize App presentation.
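For example, the MP palette and Kobe background colors can be overridden before uploading the object. This is a hypothetical customization using the slot names shown above; the list is a stand-in for a real SLICK object.

```r
mySLICKobj <- list(Misc = list(Cols = list()))   # stand-in object
mySLICKobj$Misc$Cols$MP <- c("#1B9E77", "#D95F02", "#7570B3",
                             "#E7298A", "#66A61E", "#E6AB02")   # one color per MP (6 MPs here)
mySLICKobj$Misc$Cols$KobeBG <- c(R = "#D8775D", OFing = "#F8DC7A",
                                 OF = "#FDBD56", G = "#67C18B") # Kobe quadrant backgrounds
```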


9 Acknowledgements

5W Designs

The Ocean Foundation

Blue Matter

10 References

Butterworth, D.S., Punt, A.E. 1999. Experiences in the evaluation and implementation of management procedures. ICES Journal of Marine Science, 56: 985-998. http://dx.doi.org/10.1006/jmsc.1999.0532

Cochrane, K.L., Butterworth, D.S., De Oliveira, J.A.A., Roel, B.A. 1998. Management procedures in a fishery based on highly variable stocks and with conflicting objectives: experiences in the South African pelagic fishery. Reviews in Fish Biology and Fisheries, 8: 177-214.

Punt, A.E., Butterworth, D.S., de Moor, C.L., De Oliveira, J.A.A., Haddon, M. 2014. Management strategy evaluation: best practices. Fish and Fisheries, 17(2): 303-334.

11 Glossary

  • F - fishing mortality rate
  • B - biomass
  • B0 - unfished biomass
  • SSB - spawning stock biomass
  • SSB0 - unfished spawning stock biomass
  • F/FMSY - fishing mortality rate relative to that producing maximum sustainable yield
  • B/BMSY - biomass relative to that producing maximum sustainable yield
  • MSY - maximum sustainable yield
  • MP - management procedure
  • CMP - candidate management procedure
  • SN - state of nature
  • OM - operating model
  • MSE - management strategy evaluation